In this Section we introduce a second general paradigm for effective model search - or the effective search for a proper capacity model. With the first approach discussed in the previous Section - boosting - we took a 'bottom-up' approach to fine-tuning the proper amount of capacity a model needs: that is, we began with a low capacity (and likely underfitting) model and then gradually increased its capacity by adding additional units (from the same family of universal approximators) until we built up 'just enough' capacity (that is, the amount that minimizes validation error).
In this Section we introduce the complementary approach - called regularization. Instead of building up capacity 'starting at the bottom', with regularization we take a 'top-down' view and start off with a high capacity model - that is, one which would likely overfit, providing low training error but high validation error - and gradually decrease its capacity until the capacity is 'just right' (that is, until validation error is minimized).
Imagine we have a simple nonlinear regression problem, like the one shown in the left panel of the Figure below, and we use a single model - made up of a sum of universal approximators of a given type - with far too much capacity to fit this data properly. In other words, we train our high capacity model on a training portion of this data by minimizing an appropriate cost function, e.g., the Least Squares cost. In the left panel we also show the corresponding fit in red, which wildly overfits the data.
In a high capacity model like this one we have clearly used too many and/or too flexible universal approximators (feature transformations). Equally important to diagnosing the problem of overfitting is how well we tune our model's parameters or, in other words, how well we minimize its corresponding cost function. In the present case, for example, the parameter setting of our model shown in the middle panel, which overfits our training data, comes from near the minimum of the model's cost function. This cost function is drawn figuratively in the right panel, where the minimum is shown as a red point. This holds in general as well: regardless of how many feature transformations we use, a model will overfit a training set only when we tune its parameters well or, in other words, when we minimize its corresponding cost function well. Conversely, even if we use a high capacity model, if we do not tune its parameters well the model will not overfit its training data.
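This overfitting-optimization connection can be seen in a small numerical experiment; the dataset, model, and step schedule below are hypothetical illustrations, not taken from the text. A high capacity model (a degree-15 polynomial) is fit to noisy sinusoidal data by gradient descent on the Least Squares cost, and the training cost falls steadily with every step - the longer we run, the closer we get to the cost's (overfitting) minimum:

```python
import numpy as np

# Hypothetical illustration: a degree-15 polynomial (high capacity model)
# fit to noisy sinusoidal data by gradient descent on the Least Squares cost.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(30)

degree = 15
A = np.vander(x, degree + 1)            # polynomial feature matrix

def cost(w):
    return np.mean((A @ w - y) ** 2)    # Least Squares training cost

def grad(w):
    return 2 * A.T @ (A @ w - y) / len(y)

# step length chosen from the quadratic cost's largest curvature,
# guaranteeing the cost never increases from one step to the next
alpha = 1.0 / np.linalg.eigvalsh(2 * A.T @ A / len(y)).max()

w = np.zeros(degree + 1)
history = [cost(w)]
for _ in range(2000):
    w -= alpha * grad(w)
    history.append(cost(w))

print(f"training cost: {history[0]:.4f} at start, {history[-1]:.4f} after 2000 steps")
```

Only thorough minimization drives the training cost low enough to overfit; halting the run early leaves the model's effective capacity blunted - precisely the lever regularization pulls.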
Regularization techniques for capacity tuning leverage precisely this overfitting-optimization connection, and in general work by preventing the complete minimization of a cost function associated with a high capacity model. In other words, with regularization techniques we use a high capacity model and tune its parameters to indirectly encourage a good validation fit by preventing complete minimization of its associated cost function (and thus preventing overfitting to the training data). Contrary to boosting techniques, where we started 'from the bottom' and built up a more flexible model piece-by-piece by adding single feature transformations to it, regularization starts 'from the top' - the 'top' being a high capacity model - and tempers its capacity by imperfectly minimizing its corresponding cost function. This is a somewhat indirect way of getting at a good overall fitting model, since what we are doing directly is preventing overfitting to the training data.
Regularization can be performed in a variety of ways, but in general there are two basic categories of strategies which we will discuss: early stopping and the addition of a simple capacity-blunting function to the cost.
Here the idea is to literally stop the optimization procedure early, before it reaches a minimum of the cost function and overfitting occurs. This is done by measuring validation error during optimization, and (roughly speaking) halting the procedure when validation error is minimal.
As with any form of capacity-tuning, our ideal is to find a model that provides the lowest possible error on the validation set. With early-stopping we do this by stopping the minimization of our cost function (which is measuring training error) when validation error reaches its lowest point. The basic idea is illustrated in the figure below. In the left panel we show a prototypical nonlinear regression dataset, and in the middle the cost function of a high capacity model shown figuratively in two dimensions. As we begin a run of a local optimization method we measure both the training error (provided by the cost function we are minimizing) as well as validation error at each step of the procedure - as shown in the right panel. We try to halt the procedure when the validation error has reached its lowest point.
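The procedure just described can be sketched as a short optimization loop. The helper `early_stopped_descent` and the toy linear regression dataset below are hypothetical illustrations (not code from the text): we take descent steps on the training cost, measure validation error at every step, and return the weights from the step where validation error was lowest.

```python
import numpy as np

# A minimal early-stopping sketch (hypothetical helper): descend on the
# TRAINING cost, but keep the weights that best fit the VALIDATION data.
def early_stopped_descent(grad, val_cost, w0, alpha=0.1, max_steps=500):
    w, best_w = w0.copy(), w0.copy()
    best_val = val_cost(w0)
    for _ in range(max_steps):
        w = w - alpha * grad(w)     # one descent step on the training cost
        v = val_cost(w)             # measure validation error at this step
        if v < best_val:            # remember the best-validating weights
            best_val, best_w = v, w.copy()
    return best_w, best_val

# toy usage: linear regression with a training / validation split
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 40)
y = 2 * x + 0.1 * rng.standard_normal(40)
A = np.stack([np.ones(40), x], axis=1)
tr, va = slice(0, 30), slice(30, 40)

val_cost = lambda w: np.mean((A[va] @ w - y[va]) ** 2)
grad     = lambda w: 2 * A[tr].T @ (A[tr] @ w - y[tr]) / 30

w_best, v_best = early_stopped_descent(grad, val_cost, np.zeros(2))
```

In practice one usually checks validation error every few steps rather than every step, and halts once it has stopped improving for a while, rather than running the full budget of steps as this sketch does.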
There are a number of important engineering details associated with making early stopping effective in practice.
Below we show a few examples employing the early stopping regularization strategy.
Below we plot a prototypical nonlinear regression dataset. We will use early stopping regularization to fine tune the capacity of a model consisting of $10$ tanh neural network universal approximators.
Below we illustrate a large number of gradient descent steps to tune our high capacity model for this dataset. As you move the slider left to right you can see the resulting fit at each highlighted step of the run in the original dataset (top left), training (bottom left), and validation data (bottom right). Moving the slider to where the validation error is lowest provides - for this training / validation split of the original data - a fine nonlinear model for the entire dataset.
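The high capacity model used in this example can be sketched as follows; the parameterization shown is an assumed form, $f(x) = w_0 + \sum_{j=1}^{10} w_j \tanh(a_j + b_j x)$, a bias plus a sum of $10$ tanh units, each with its own internal parameters:

```python
import numpy as np

B = 10  # number of tanh universal approximators

def model(x, theta):
    # theta holds internal parameters (a, b) and linear weights (w0, w):
    # f(x) = w0 + sum_j w[j] * tanh(a[j] + b[j] * x)
    a, b, w0, w = theta["a"], theta["b"], theta["w0"], theta["w"]
    return w0 + np.tanh(a[None, :] + b[None, :] * x[:, None]) @ w

def least_squares(theta, x, y):
    return np.mean((model(x, theta) - y) ** 2)   # training cost to descend on

# evaluate the cost at a random initialization on toy sinusoidal data
rng = np.random.default_rng(2)
theta = {"a": rng.standard_normal(B), "b": rng.standard_normal(B),
         "w0": 0.0, "w": rng.standard_normal(B)}
x = np.linspace(-1, 1, 25)
y = np.sin(3 * x)
print(least_squares(theta, x, y))
```

Gradient descent is then run on `least_squares` over the training portion of the data, with validation error tracked at each step as described above.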
Below we plot a prototypical nonlinear classification dataset. We will use early stopping regularization to fine tune the capacity of a model consisting of $5$ tanh neural network universal approximators.
Below we illustrate a large number of gradient descent steps to tune our high capacity model for this dataset. As you move the slider left to right you can see the resulting fit at each highlighted step of the run in the original dataset (top left), training (bottom left), and validation data (bottom right). Moving the slider to where the validation error is lowest provides - for this training / validation split of the original data - a fine nonlinear model for the entire dataset.
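For classification the model takes the same form - here a sum of $5$ tanh units - but the cost changes. The sketch below assumes two-class labels $y \in \{-1, +1\}$ and the softmax / log-loss cost; early stopping then proceeds exactly as in regression, with validation error measured by this cost (or simply by the number of misclassified validation points):

```python
import numpy as np

B = 5  # five tanh universal approximators, as in this example

def model(x, theta):
    # theta = (a, b, w0, w): f(x) = w0 + sum_j w[j] * tanh(a[j] + b[j] * x)
    a, b, w0, w = theta
    return w0 + np.tanh(a[None, :] + b[None, :] * x[:, None]) @ w

def softmax_cost(theta, x, y):
    # two-class softmax / log-loss over labels y in {-1, +1} (assumed convention)
    return np.mean(np.log(1.0 + np.exp(-y * model(x, theta))))

def misclassifications(theta, x, y):
    # validation error can also be measured as a simple miscount
    return int(np.sum(np.sign(model(x, theta)) != y))

# toy usage at a random initialization
rng = np.random.default_rng(3)
theta = (rng.standard_normal(B), rng.standard_normal(B), 0.0,
         rng.standard_normal(B))
x = np.linspace(-1, 1, 20)
y = np.where(x > 0, 1.0, -1.0)   # hypothetical two-class labels
print(softmax_cost(theta, x, y), misclassifications(theta, x, y))
```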
By adding a simple function to the cost function we change its shape, and in particular change the location of its global minima. Since the global minima of the adjusted cost function do not align with those of the original cost, the adjusted cost can then be completely minimized with less fear of overfitting to the training data. This method of regularization is illustrated via the animation below. In the left panel we have a prototypical single input cost function $g(w)$, in the middle a simple function - here a quadratic $w^2$ - which we will add to the cost in order to 'blunt' it, and in the right panel their linear combination $g(w) + \lambda w^2$. As we increase $\lambda > 0$ notice how the cost's global minimum moves - in this case to the left. Thus a complete minimization of the combined function will not reach the global minimum of the original cost, and overfitting is prevented.
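A worked instance makes the moving minimum concrete. Taking the hypothetical cost $g(w) = (w-2)^2$, the combined function $g(w) + \lambda w^2$ has its minimum at $w = 2/(1+\lambda)$, which slides left toward zero as $\lambda$ grows, away from the original minimum at $w = 2$:

```python
import numpy as np

# Hypothetical cost g(w) = (w - 2)**2 plus the quadratic regularizer w**2.
# Minimizing g(w) + lam * w**2 over a fine grid locates its global minimum,
# which analytically sits at 2 / (1 + lam).
def regularized_minimizer(lam):
    w = np.linspace(-1.0, 3.0, 400001)
    total = (w - 2.0) ** 2 + lam * w ** 2   # g(w) + lambda * w^2
    return w[np.argmin(total)]

for lam in [0.0, 0.5, 1.0, 4.0]:
    print(lam, regularized_minimizer(lam))  # minimizer = 2 / (1 + lam)
```

With $\lambda = 0$ the minimum sits at the original (overfitting) location; as $\lambda$ increases, complete minimization of the combined cost lands progressively further from it.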
TO BE CONTINUED....